The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because the measurement of the data is not precise (due to the instruments), or because approximations are used instead of the real data (e.g., 3.14 instead of π).
In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm.
One commonly distinguishes between the relative error and the absolute error. The absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed in terms of per 100.
As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002. The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003 and in the second it is only 0.000003.
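The three definitions above can be written as short helper functions. This is a minimal sketch (the function names are illustrative, not from any standard library), reproducing the 50 vs. 49.9 example from the text:

```python
def absolute_error(v, v_approx):
    """Magnitude of the difference between the exact value and the approximation."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error divided by the magnitude of the exact value (v must be nonzero)."""
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed per 100."""
    return 100 * relative_error(v, v_approx)

# Exact value 50, approximation 49.9:
print(absolute_error(50, 49.9))  # ~0.1
print(relative_error(50, 49.9))  # ~0.002
```

Note that floating-point arithmetic makes the results only approximately equal to 0.1 and 0.002, which is why comparisons against them should use a tolerance.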
Another example: suppose you read a beaker as 5 mL when the correct reading is 6 mL. The absolute error is 1 mL, and the percent error is |6 − 5|/6 ≈ 16.7%.
Given some value v and its approximation v_approx, the absolute error is

ε = |v − v_approx|,

where the vertical bars denote the absolute value. If v ≠ 0, the relative error is

η = |v − v_approx| / |v| = |1 − v_approx/v|,

and the percent error is

δ = 100% × η = 100% × |v − v_approx| / |v|.

These definitions can be extended to the case when v and v_approx are n-dimensional vectors, by replacing the absolute value with an n-norm.[1]
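The vector extension can be sketched as follows, using a p-norm in place of the absolute value. This is an illustrative implementation (the function names are assumptions, not an established API), with p = 2 giving the familiar Euclidean norm:

```python
def p_norm(x, p=2):
    """p-norm of a vector x; p=2 is the Euclidean norm."""
    return sum(abs(xi) ** p for xi in x) ** (1 / p)

def relative_error_vec(v, v_approx, p=2):
    """Relative error for n-dimensional vectors: ||v - v_approx||_p / ||v||_p."""
    diff = [a - b for a, b in zip(v, v_approx)]
    return p_norm(diff, p) / p_norm(v, p)

# Approximating the vector (3, 4) by (3, 4.1): the Euclidean norm of the
# difference is ~0.1, and ||(3, 4)|| = 5, so the relative error is ~0.02.
print(relative_error_vec([3, 4], [3, 4.1]))
```

Any p ≥ 1 yields a valid norm; which one is appropriate depends on how errors across components should be weighted.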
In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[2]